By capturing spectral data across a wide frequency range along with spatial information, hyperspectral imaging (HSI) can detect minor differences in temperature, moisture and chemical composition. HSI has therefore been applied successfully in diverse fields, including remote sensing for security and defence, precision agriculture for vegetation and crop monitoring, and quality control of food/beverages and pharmaceuticals. However, the use of HSI for condition monitoring and damage detection in carbon fibre reinforced polymer (CFRP) remains a relatively untouched area, since existing non-destructive testing (NDT) techniques focus mainly on delivering information about the physical integrity of structures but not about material composition. To this end, HSI can provide a unique way to tackle this challenge. In this paper, the application of HSI to the non-destructive inspection of CFRP products is introduced, using a near-infrared HSI camera, against the background of the EU H2020 FibreEUSE project. Technical challenges and solutions for three case studies are presented in detail, covering adhesive residue detection, surface damage detection and cobot-based automated inspection. The experimental results fully demonstrate the great potential of HSI and related vision techniques for the NDT of CFRP, and in particular the potential to satisfy industrial manufacturing environments.
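The per-pixel spectral classification behind a case study such as adhesive-residue detection can be sketched as follows. Everything here is a synthetic stand-in, not the project's actual data or pipeline: the hyperspectral cube, the CFRP and residue reference spectra, and the correlation threshold are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hyperspectral cube: 32x32 pixels, 50 NIR bands. Clean CFRP pixels
# and adhesive-residue pixels get slightly different synthetic spectra.
H, W, B = 32, 32, 50
cfrp = np.exp(-np.linspace(0, 1, B))                     # assumed CFRP spectrum
residue = cfrp + 0.3 * np.sin(np.linspace(0, np.pi, B))  # assumed residue spectrum
cube = np.tile(cfrp, (H, W, 1)) + 0.02 * rng.normal(size=(H, W, B))
cube[10:20, 10:20] = residue + 0.02 * rng.normal(size=(10, 10, B))

def detect(cube, reference, thresh=0.95):
    """Flag pixels whose spectrum correlates poorly with the clean CFRP
    reference (a spectral-similarity sketch of residue detection)."""
    flat = cube.reshape(-1, cube.shape[-1])
    ref = (reference - reference.mean()) / reference.std()
    spec = (flat - flat.mean(axis=1, keepdims=True)) / flat.std(axis=1, keepdims=True)
    corr = (spec @ ref) / cube.shape[-1]     # per-pixel correlation with reference
    return (corr < thresh).reshape(cube.shape[:2])

mask = detect(cube, cfrp)
```

A real system would of course calibrate reference spectra from measured CFRP samples and tune the threshold per product line.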
translated by 谷歌翻译
Purpose: The aim of this study was to demonstrate the utility of unsupervised domain adaptation (UDA) in automated knee osteoarthritis (OA) phenotype classification using a small dataset (n=50). Materials and Methods: For this retrospective study, we collected 3,166 three-dimensional (3D) double-echo steady-state magnetic resonance (MR) images from the Osteoarthritis Initiative dataset and 50 3D turbo/fast spin-echo MR images from our institute (in 2020 and 2021) as the source and target datasets, respectively. For each patient, the degree of knee OA was initially graded according to the MRI Osteoarthritis Knee Score (MOAKS) before being converted to binary OA phenotype labels. The proposed UDA pipeline included (a) pre-processing, which involved automatic segmentation and region-of-interest cropping; (b) source classifier training, which involved pre-training phenotype classifiers on the source dataset; (c) target encoder adaptation, which involved unsupervised adaptation of the source encoder to the target encoder; and (d) target classifier validation, which involved statistical analysis of the target classification performance evaluated by the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity and accuracy. Additionally, a classifier was trained without UDA for comparison. Results: The target classifier trained with UDA achieved improved AUROC, sensitivity, specificity and accuracy for both knee OA phenotypes compared with the classifier trained without UDA. Conclusion: The proposed UDA approach improves the performance of automated knee OA phenotype classification for small target datasets by utilising a large, high-quality source dataset for training. The results successfully demonstrated the advantages of the UDA approach in classification on small datasets.
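The core idea of steps (b) and (c) — train a classifier on labelled source features, then adapt target features to the source distribution without target labels — can be sketched with a toy example. This uses simple moment matching in place of the paper's learned target encoder, and a nearest-class-mean "classifier"; all data and models are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "source" dataset: features are already discriminative.
n = 500
y_src = rng.integers(0, 2, n)
X_src = rng.normal(0, 1, (n, 2)) + 3.0 * y_src[:, None]   # class-dependent shift

# Toy "target" dataset: same task, but the feature distribution is shifted/scaled.
y_tgt = rng.integers(0, 2, n)
X_tgt = (rng.normal(0, 1, (n, 2)) + 3.0 * y_tgt[:, None]) * 2.0 + 5.0

# (b) source classifier "training": nearest class mean on source features.
mu = np.stack([X_src[y_src == c].mean(axis=0) for c in (0, 1)])

def classify(X):
    d = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=-1)
    return d.argmin(axis=1)

# (c) target adaptation: align target feature statistics to the source
# statistics without using any target labels (moment matching).
def adapt(X):
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    return Z * X_src.std(axis=0) + X_src.mean(axis=0)

acc_no_uda = (classify(X_tgt) == y_tgt).mean()
acc_uda = (classify(adapt(X_tgt)) == y_tgt).mean()
```

With the domain shift removed, the frozen source classifier transfers to the target data; the target labels `y_tgt` are used only for evaluation, never for adaptation.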
In a recent paper, Wunderlich and Pehle introduced the EventProp algorithm, which enables training spiking neural networks by gradient descent on exact gradients. In this paper we present extensions of EventProp to support a wider class of loss functions, and an implementation in the GPU-enhanced neuronal networks framework which exploits sparsity. The GPU acceleration allows us to test EventProp extensively on more challenging learning benchmarks. We find that EventProp performs well on some tasks, but for others there are issues where learning is slow or fails entirely. Here, we analyse these issues in detail and discover that they relate to the use of the exact gradient of the loss function, which by its nature does not provide information about loss changes due to spike creation or spike deletion. Depending on the details of the task and loss function, descending the exact gradient with EventProp can lead to the deletion of important spikes and so to an inadvertent increase of the loss and decrease of classification accuracy, and hence a failure to learn. In other situations, the lack of knowledge about the benefits of creating additional spikes can lead to a lack of gradient flow into earlier layers, slowing down learning. We eventually present a first glimpse of a solution to these problems in the form of `loss shaping', where we introduce a suitable weighting function into an integral loss to increase gradient flow from the output layer towards earlier layers.
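The "loss shaping" idea — inserting a weighting function into an integral loss — can be illustrated on a discretized toy trace. The trace, the squared-error loss, and the exponential weighting below are assumptions for illustration, not the paper's actual loss or weighting function.

```python
import numpy as np

# Toy output trace of a network over time (stand-in for V(t)).
T, dt = 100, 0.1
t = np.arange(T) * dt
v = np.sin(t) ** 2

def integral_loss(v, weights=None, dt=0.1):
    """Discretized integral loss  sum_t w(t) * l(V(t)) * dt,
    with l taken here as squared error against a target of 1."""
    l = (v - 1.0) ** 2
    if weights is None:
        weights = np.ones_like(l)
    return np.sum(weights * l * dt)

# Plain integral loss (uniform weighting).
loss_plain = integral_loss(v)

# "Shaped" loss: a decaying weight emphasizes early times, so gradient
# contributions from early activity are not drowned out by later ones.
w = np.exp(-t / 2.0)
loss_shaped = integral_loss(v, weights=w)
```

In an EventProp-style setting the same reweighting would enter the adjoint dynamics, changing how much gradient flows back to earlier layers at each time step.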
Artificial intelligence methods including deep neural networks (DNN) can provide rapid molecular classification of tumors from routine histology with accuracy that matches or exceeds human pathologists. Discerning how neural networks make their predictions remains a significant challenge, but explainability tools help provide insights into what models have learned when corresponding histologic features are poorly defined. Here, we present a method for improving explainability of DNN models using synthetic histology generated by a conditional generative adversarial network (cGAN). We show that cGANs generate high-quality synthetic histology images that can be leveraged for explaining DNN models trained to classify molecularly-subtyped tumors, exposing histologic features associated with molecular state. Fine-tuning synthetic histology through class and layer blending illustrates nuanced morphologic differences between tumor subtypes. Finally, we demonstrate the use of synthetic histology for augmenting pathologist-in-training education, showing that these intuitive visualizations can reinforce and improve understanding of histologic manifestations of tumor biology.
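Class blending in a conditional generator amounts to interpolating between learned class embeddings before decoding. The sketch below shows only that mechanism; the embedding table, generator, and dimensions are random stand-ins, not a trained cGAN.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for learned cGAN components: a class-embedding table and a
# "generator" mapping (noise, class embedding) to an output image vector.
embed = rng.normal(size=(2, 16))             # embeddings for two tumor subtypes
W = rng.normal(size=(16 + 16, 64)) * 0.1     # toy generator weights

def generate(z, class_vec):
    return np.tanh(np.concatenate([z, class_vec]) @ W)

def class_blend(z, alpha):
    """Blend the two class embeddings: alpha=0 gives subtype A, alpha=1
    subtype B; intermediate alphas expose the morphologic continuum."""
    c = (1.0 - alpha) * embed[0] + alpha * embed[1]
    return generate(z, c)

z = rng.normal(size=16)
imgs = [class_blend(z, a) for a in np.linspace(0.0, 1.0, 5)]
```

Holding the noise `z` fixed while sweeping `alpha` is what isolates class-dependent morphology from sample-to-sample variation.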
End-to-end deep neural networks (DNNs) have become state-of-the-art (SOTA) for solving inverse problems. Despite their outstanding performance, during deployment, such networks are sensitive to minor variations in the training pipeline and often fail to reconstruct small but important details, a feature critical in medical imaging, astronomy, or defence. Such instabilities in DNNs can be explained by the fact that they ignore the forward measurement model during deployment, and thus fail to enforce consistency between their output and the input measurements. To overcome this, we propose a framework that transforms any DNN for inverse problems into a measurement-consistent one. This is done by appending to it an implicit layer (or deep equilibrium network) designed to solve a model-based optimization problem. The implicit layer consists of a shallow learnable network that can be integrated into the end-to-end training. Experiments on single-image super-resolution show that the proposed framework leads to significant improvements in reconstruction quality and robustness over the SOTA DNNs.
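The appended measurement-consistency layer can be caricatured on a toy linear inverse problem. Note the simplifications: the "DNN output" is simulated, and a plain gradient iteration on a model-based objective replaces the paper's learnable implicit layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model y = A x (a stand-in for, e.g., downsampling
# in single-image super-resolution).
A = rng.normal(size=(8, 16)) / 4.0
x_true = rng.normal(size=16)
y = A @ x_true

# Stand-in for the output of a pretrained end-to-end DNN: close to the
# ground truth, but not consistent with the measurements.
x_dnn = x_true + 0.3 * rng.normal(size=16)

def consistency_layer(x0, y, A, step=0.2, iters=500, lam=0.1):
    """Sketch of the appended implicit layer: iterate towards the minimizer
    of ||A x - y||^2 + lam ||x - x0||^2, pulling the reconstruction towards
    measurement consistency while staying near the network output x0."""
    x = x0.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam * (x - x0)
        x = x - step * grad
    return x

x_out = consistency_layer(x_dnn, y, A)
res_before = np.linalg.norm(A @ x_dnn - y)
res_after = np.linalg.norm(A @ x_out - y)
```

The fixed point trades off fidelity to the network output against fidelity to the measurements; in the paper this inner solve is differentiated through, so the whole pipeline remains trainable end to end.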
The International Atomic Energy Agency (IAEA) stopping power database is a highly valued public resource compiling most of the experimental measurements published over nearly a century. The database, accessible to the global scientific community, is continuously updated and has been extensively employed in theoretical and experimental research for more than 30 years. This work aims to employ machine learning algorithms on the 2021 IAEA database to predict accurate electronic stopping power cross sections for any ion and target combination in a wide range of incident energies. Unsupervised machine learning methods are applied to clean the database in an automated manner. These techniques purge the data by removing suspicious outliers and old isolated values. A large portion of the remaining data is used to train a deep neural network, while the rest is set aside, constituting the test set. The present work considers collisional systems only with atomic targets. The first version of the ESPNN (electronic stopping power neural-network code), openly available to users, is shown to yield predicted values in excellent agreement with the experimental results of the test set.
In machine learning, there has been renewed interest in neural network ensembles (NNEs), whereby predictions are obtained as an aggregate from a set of smaller models rather than from a single larger model. Here, we show how to define and train an NNE using techniques from the study of rare trajectories in stochastic systems. We define an NNE in terms of the trajectory of the model parameters under simple, discrete-time, diffusive dynamics, and train the NNE by biasing these trajectories towards a small time-integrated loss, as controlled by appropriate counting fields which act as hyperparameters. We demonstrate the viability of this technique on a range of simple supervised learning tasks, and discuss potential advantages of our trajectory sampling approach compared with more conventional gradient-based methods.
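As a toy illustration of biasing parameter trajectories towards small loss (not the paper's algorithm: the one-parameter task, the cloning scheme, and the bias strength are all simplified assumptions), a population-dynamics sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised task: learn w* = 2 in y = w * x.
X = rng.normal(size=50)
Y = 2.0 * X

def loss(w):
    return np.mean((w * X - Y) ** 2)

# The ensemble is a population of parameter "walkers" evolving under
# discrete-time diffusive dynamics; a cloning/selection step biases the
# population towards small loss, with s playing the role of a counting field.
K, steps, sigma, s = 32, 300, 0.1, 5.0
ws = rng.normal(0.0, 1.0, size=K)
for _ in range(steps):
    ws = ws + sigma * rng.normal(size=K)                  # diffusive update
    weights = np.exp(-s * np.array([loss(w) for w in ws]))
    weights /= weights.sum()
    ws = rng.choice(ws, size=K, p=weights)                # bias the trajectories

w_hat = ws.mean()        # aggregated ensemble prediction of the parameter
```

No gradients of the loss are ever computed: selection pressure alone steers the diffusive dynamics, which is the contrast with gradient-based training drawn in the abstract.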
We present a novel method for placing 3D human animations into 3D scenes while maintaining any human-scene interactions present in the animation. We use the notion of computing the meshes in the animation that are most important for interacting with the scene, which we call "keyframes". These keyframes allow us to better optimize the placement of the animation in the scene, so that interactions in the animation (standing, lying, sitting, etc.) match the affordances of the scene (e.g., standing on the floor or lying on a bed). We compare our method, which we call PAAK, with prior approaches, including POSA, the PROX ground truth, and a motion synthesis method, and highlight the benefits of our method with a perceptual study. Human evaluators preferred our PAAK method over the PROX ground-truth data 64.6% of the time. Furthermore, in direct comparisons, evaluators preferred PAAK over competing methods, including preferring it 61.5% of the time over POSA.
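The keyframe/affordance matching idea can be caricatured on a one-dimensional grid; the scene, the interaction labels, and the scoring rule below are invented stand-ins for illustration, not the PAAK implementation.

```python
import numpy as np

# Toy 1D-grid scene: each cell advertises an affordance label.
scene = np.array(["floor", "floor", "bed", "bed", "floor"])

# Keyframes: the animation meshes that interact with the scene, each with
# an interaction label and a position offset (in cells) from the origin.
keyframes = [("standing", 0), ("lying", 2)]

# Which scene affordance supports which interaction (assumed mapping).
supports = {"standing": "floor", "lying": "bed"}

def placement_score(origin):
    """Count keyframe interactions whose required affordance matches the
    scene affordance at the placed location."""
    score = 0
    for label, offset in keyframes:
        pos = origin + offset
        if 0 <= pos < scene.size and scene[pos] == supports[label]:
            score += 1
    return score

# Optimize the placement: choose the origin maximizing affordance matches.
best_origin = max(range(scene.size), key=placement_score)
```

The real method optimizes a continuous 3D pose of the whole animation, but the objective has the same shape: reward placements where each keyframe lands on geometry that affords its interaction.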
We present the joint contribution of IST and Unbabel to the WMT 2022 Shared Task on Quality Estimation (QE). Our team participated in all three subtasks: (i) sentence- and word-level quality prediction; (ii) explainable QE; and (iii) critical error detection. For all tasks, we build on top of the COMET framework, connecting it with the predictor-estimator architecture of OpenKiwi and equipping it with a word-level sequence tagger and an explanation extractor. Our results suggest that incorporating references during pretraining improves performance on downstream tasks for several language pairs, and that this can be further boosted by jointly training with sentence- and word-level objectives. In addition, combining attention and gradient information proved to be the top strategy for extracting good explanations from sentence-level QE models. Overall, our submissions achieved the best results for all three tasks for almost all language pairs.
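Combining attention and gradient information into per-token relevance can be sketched on a toy attention-pooling model. The model, the finite-difference gradients, and the attention-times-gradient-norm combination rule are illustrative assumptions, not the submission's architecture or extractor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sentence-level QE model: token vectors -> attention pooling -> score.
V, D = 6, 8                        # tokens, embedding dimension
X = rng.normal(size=(V, D))        # token representations
a_w = rng.normal(size=D)           # attention scoring vector
out_w = rng.normal(size=D)         # output head

def forward(X):
    logits = X @ a_w
    attn = np.exp(logits) / np.exp(logits).sum()   # attention over tokens
    pooled = attn @ X
    return attn, pooled @ out_w                    # sentence quality score

attn, score = forward(X)

# Gradient of the score w.r.t. each token representation, via finite
# differences (an autodiff framework would be used in practice).
eps = 1e-5
grad = np.zeros_like(X)
for i in range(V):
    for j in range(D):
        Xp = X.copy()
        Xp[i, j] += eps
        grad[i, j] = (forward(Xp)[1] - score) / eps

# Combine attention and gradient information into per-token relevance,
# used as a word-level explanation of the sentence-level score.
relevance = attn * np.linalg.norm(grad, axis=1)
```

Tokens that are both attended to and influential on the output score dominate, which is the intuition behind mixing the two signals.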
A fundamental question regarding the use of ML models concerns the explanation of their predictions to foster transparency in decision-making. Although several explainability methods have emerged, some gaps regarding the reliability of their explanations have been identified. For instance, most methods are unstable (meaning they provide drastically different explanations for similar data) and do not cope well with irrelevant features (i.e., features not related to the label). This article introduces two new explainability methods, VarImp and SupClus, which overcome these issues by using local regressions fitted with a weighted distance that takes variable importance into account. VarImp generates an explanation for each instance and can be applied to datasets with more complex relationships, while SupClus explains clusters of instances with similar explanations and can be applied to simpler datasets in which clusters can be found. We compared our methods with state-of-the-art approaches and show that they yield better explanations according to several metrics, particularly in high-dimensional problems with irrelevant features and when the relationship between features and target is non-linear.
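The core mechanism — a local regression whose kernel uses an importance-weighted distance — can be sketched as follows. The dataset, the given importance vector, and the Gaussian kernel are assumptions for illustration; this is a sketch of the idea, not the VarImp implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: the label depends on feature 0 only; feature 1 is irrelevant.
X = rng.normal(size=(300, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=300)

# Variable importances (assumed given, e.g. from a global model).
importance = np.array([1.0, 0.0])

def explain(x0, bandwidth=1.0):
    """Local linear regression around x0, with kernel weights computed from
    an importance-weighted distance that ignores unimportant features."""
    d2 = ((X - x0) ** 2 * importance).sum(axis=1)   # weighted squared distance
    w = np.exp(-d2 / (2 * bandwidth ** 2))          # kernel weights
    sw = np.sqrt(w)
    Xd = np.hstack([np.ones((X.shape[0], 1)), X])   # add intercept column
    coef, *_ = np.linalg.lstsq(Xd * sw[:, None], y * sw, rcond=None)
    return coef[1:]                                 # local feature effects

coefs = explain(np.zeros(2))
```

Because the distance down-weights the irrelevant feature, the neighbourhood is defined only along directions that matter, so the local coefficients recover the true effect of feature 0 and a near-zero effect for feature 1.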